
    Rich-Club Organization in Effective Connectivity among Cortical Neurons.

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths were also nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than expected by chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small but information-rich subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory.
    SIGNIFICANCE STATEMENT: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.
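    The pairwise transfer entropy used above can be illustrated with a minimal plug-in estimator for binarized spike trains. This is a simplified sketch, not the validated estimator used in the study (which scans multiple synaptic delays and applies significance testing); the function name, the single delay, and the order-1 target history are assumptions made for the illustration.

```python
import numpy as np

def transfer_entropy(source, target, delay=1):
    """Plug-in transfer entropy (bits) from a binary source spike train
    to a binary target, at a single interaction delay, with an order-1
    target history. Simplified sketch, not a production estimator."""
    tf = target[delay:]      # target "future" state
    tp = target[:-delay]     # target past (order-1 history)
    sp = source[:-delay]     # source past
    te = 0.0
    for a in (0, 1):         # target future value
        for b in (0, 1):     # target past value
            for c in (0, 1):  # source past value
                p_abc = np.mean((tf == a) & (tp == b) & (sp == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((tp == b) & (sp == c))
                p_ab = np.mean((tf == a) & (tp == b))
                p_b = np.mean(tp == b)
                # TE = sum p(a,b,c) * log2[ p(a|b,c) / p(a|b) ]
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te
```

    A source that deterministically drives the target one bin later yields roughly one bit of transfer entropy per bin, while an unrelated source yields a value near zero (up to estimator bias).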

    DynaMaR: Dynamic Prompt with Mask Token Representation

    Recent research has shown that large language models pretrained using unsupervised approaches can achieve significant performance improvement on many downstream tasks. Typically, when adapting these language models to downstream tasks, like a classification or regression task, we employ a fine-tuning paradigm in which the sentence representation from the language model is input to a task-specific head; the model is then fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has been proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt technologies in practice. There are two issues that arise with the standard prompt approach. First, it can overfit to the prompt template. Second, it requires manual effort to formulate the downstream task as a language model problem. In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues. We refer to our approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results show that DynaMaR can achieve an average improvement of 10% in few-shot settings and an improvement of 3.7% in data-rich settings over the standard fine-tuning approach on four e-commerce applications.
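    The standard discrete-prompt approach that DynaMaR builds on can be sketched as follows. This is a toy illustration of prompt templates and verbalizers, not the authors' DynaMaR implementation; `build_prompt`, `classify`, the template string, and the `score_mask_token` callable (a stand-in for a real masked language model) are all hypothetical names introduced here.

```python
# Standard discrete-prompt classification: wrap the input in a template
# containing a [MASK] slot, then pick the label whose verbalizer word the
# language model scores highest at the mask position.

def build_prompt(text: str, template: str = "{x} It was [MASK].") -> str:
    """Insert the input into a prompt template with a [MASK] slot."""
    return template.format(x=text)

# Verbalizer: maps each task label to a word the LM can predict at [MASK].
VERBALIZER = {"positive": "great", "negative": "terrible"}

def classify(text, score_mask_token):
    """Score each verbalizer word at the [MASK] position and return the
    label with the highest score. `score_mask_token(prompt, word)` stands
    in for a masked-LM scoring call."""
    prompt = build_prompt(text)
    scores = {label: score_mask_token(prompt, word)
              for label, word in VERBALIZER.items()}
    return max(scores, key=scores.get)
```

    In a real setup `score_mask_token` would query a BERT-style model; the paper's point is that performance can hinge on the hand-written template, which DynaMaR aims to mitigate.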

    Analysis diagram and partial information decomposition.

    <p><b>(A)</b> We used a custom-built 512-electrode array to record spiking activity from cortico-hippocampal organotypic cultures. We then used transfer entropy to detect effective connectivity among the recorded neurons. Finally, we studied the topology and the two-input computations in these networks. We also examined bounds on higher-order computation. <b>(B)</b> To study two-input computations, we used multivariate transfer entropy to deconstruct traditional transfer entropy measures into synergy, redundancy, and unique information terms [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.ref004" target="_blank">4</a>]. Specifically, two-input computations were measured using the synergistic information computed for the system of two neurons sending significant amounts of information (as measured by transfer entropy) to a third neuron (see <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#sec002" target="_blank">Materials and Methods</a> – <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#sec007" target="_blank">Multivariate Transfer Entropy and Computation</a>).</p>
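    The decomposition in panel (B) can be sketched for discrete variables using the Williams-Beer I_min redundancy measure, one standard choice for partial information decomposition. This is a minimal illustration, not the paper's exact pipeline; `pid` and `mutual_info` are hypothetical helper names, and the joint-table input format is an assumption.

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information (bits) from a 2-D joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def pid(p):
    """Two-source partial information decomposition of p[x1, x2, y] using
    the Williams-Beer I_min redundancy. Returns
    (redundancy, unique_1, unique_2, synergy) in bits."""
    p1y, p2y = p.sum(axis=1), p.sum(axis=0)   # p(x1, y), p(x2, y)
    p12y = p.reshape(-1, p.shape[2])          # p((x1, x2), y)
    py = p.sum(axis=(0, 1))

    def specific_info(pxy):
        # I_spec(y; X) = sum_x p(x|y) * [log2 p(y|x) - log2 p(y)]
        px = pxy.sum(axis=1)
        out = np.zeros(len(py))
        for yi in range(len(py)):
            if py[yi] == 0:
                continue
            for xi in range(len(px)):
                pj = pxy[xi, yi]
                if pj == 0:
                    continue
                out[yi] += (pj / py[yi]) * (np.log2(pj / px[xi])
                                            - np.log2(py[yi]))
        return out

    redundancy = float(np.sum(py * np.minimum(specific_info(p1y),
                                              specific_info(p2y))))
    mi1, mi2, mi12 = mutual_info(p1y), mutual_info(p2y), mutual_info(p12y)
    return (redundancy, mi1 - redundancy, mi2 - redundancy,
            mi12 - mi1 - mi2 + redundancy)
```

    For an XOR target the decomposition assigns all one bit of information to synergy, whereas a target that simply copies both (identical) inputs is pure redundancy.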

    A degree-modified Hebbian rule qualitatively matched results from the original data and balanced reinforcement of network-wide activity with neuron-to-neuron communication.

    <p><b>(A)</b> Structure of the simple feedforward network model. Note that neurons with high indices were more strongly correlated with the binary signal b(t). <b>(B)</b> Connectivity probability diagrams before and after network rewiring. Probabilities were averaged across 100 models. Note that the Hebbian rule pooled all the connections between neurons with strong correlations to the binary signal, while the degree-modified Hebbian rule preserved many connections from input layer neurons that were not highly correlated with the binary signal. <b>(C)</b> Degree vs. synergy correlation values for models and the real biological data. Note that the modified Hebbian rules qualitatively reproduced the correlation pattern seen in the real data. (Light dots represent individual models or recordings, dark dots represent mean value, and bars represent standard deviation) <b>(D)</b> Distributions of average mutual information between connected neurons (unconditioned (top) and conditioned on the binary signal (bottom)) across models. Note that the degree-modified Hebbian model showed higher mutual information after the effects of the common binary signal were removed. (mean value, bars represent standard deviation, Mann-Whitney rank-sum test (three dots: p < 0.001), False Discovery Rate Control [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.ref096" target="_blank">96</a>–<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.ref098" target="_blank">98</a>]) <b>(E)</b> Though all rewiring methods increased the mutual information between connected pairs (which reinforces the common network activity defined by the binary signal), the degree-modified Hebbian rule also increased mutual information between connected neurons (neuron-to-neuron communication) independent of common network activity.</p>
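    The contrast between a plain Hebbian rewiring rule and a degree-modified one can be sketched as follows. The caption does not give the exact update rule, so everything here is an assumption: `rewire` is a hypothetical toy in which new edges are drawn in proportion to (nonnegative) pairwise correlation, and the degree-modified variant additionally discounts targets by their current in-degree so that edges do not all pool onto the most signal-correlated neurons.

```python
import numpy as np

def rewire(adj, corr, n_moves, degree_penalty=False, rng=None):
    """Toy correlation-driven rewiring: each move deletes a random existing
    edge and adds a new edge with probability proportional to the pairwise
    correlation, optionally discounted by the target's current in-degree
    (the 'degree-modified' variant). Assumes corr >= 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(n_moves):
        # Remove one existing edge at random.
        rows, cols = np.nonzero(adj)
        k = rng.integers(len(rows))
        adj[rows[k], cols[k]] = 0
        # Score every absent, non-self edge by correlation.
        score = corr * (adj == 0)
        np.fill_diagonal(score, 0)
        if degree_penalty:
            # Discount targets that already receive many connections.
            score = score / (1 + adj.sum(axis=0, keepdims=True))
        p = score.ravel() / score.sum()
        idx = rng.choice(n * n, p=p)
        adj[idx // n, idx % n] = 1
    return adj
```

    With this toy rule, the plain Hebbian variant concentrates edges onto the few high-correlation targets, while the degree penalty spreads connections more evenly, qualitatively matching the pooling vs. preservation contrast described in panel (B).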

    Transfer entropy and degree distributions.

    <p><b>(A and C)</b> Transfer entropy distributions for raw TE and normalized TE across the two time scales studied in this analysis (interactions with delays of 1.6–6.4 ms (A) and 3.5–14 ms (C)). Note that all distributions are roughly log-normal (see <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.e022" target="_blank">Eq 21</a>, nonlinear regression performed in Matlab). Solid line is average of all recordings; shaded region represents ± one standard deviation across recordings. <b>(B and D)</b> In, out, and total degree distributions from the real data and total degree distributions from random networks with matching numbers of neurons, connections, and sampling statistics to the real data. Note that because the total degree distribution from the real data extends far beyond the distribution from the random networks, the real data are heavy-tailed. We did not assess whether the degree distributions were scale-free due to issues surrounding sub-sampling [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.ref093" target="_blank">93</a>]. Also, note that the in-degree distribution had a shorter tail than the out-degree distribution, indicating that there were more high out-degree neurons than there were high in-degree neurons. Solid line is average of all recordings; shaded region represents ± one standard deviation across recordings.</p>
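    The heavy-tail comparison in panels (B and D) can be sketched with a simple null model: directed random networks with the same numbers of neurons and connections, whose maximum total degree bounds what chance alone produces. This is an illustrative sketch, not the paper's null (which also matches sampling statistics); `degree_null_tail` is a hypothetical helper name.

```python
import numpy as np

def degree_null_tail(n_neurons, n_edges, n_nulls=200, rng=None):
    """Maximum total (in + out) degree across directed Erdos-Renyi-style
    null networks with fixed numbers of nodes and edges. If the observed
    maximum degree far exceeds these null maxima, the observed degree
    distribution is heavy-tailed relative to chance."""
    if rng is None:
        rng = np.random.default_rng(0)
    maxima = np.empty(n_nulls)
    for i in range(n_nulls):
        # Place n_edges distinct directed, non-self edges uniformly.
        idx = rng.choice(n_neurons * (n_neurons - 1),
                         size=n_edges, replace=False)
        src = idx // (n_neurons - 1)
        tgt = idx % (n_neurons - 1)
        tgt = tgt + (tgt >= src)   # skip the diagonal (no self-edges)
        total = (np.bincount(src, minlength=n_neurons)
                 + np.bincount(tgt, minlength=n_neurons))
        maxima[i] = total.max()
    return maxima
```

    For example, with 100 neurons and 500 connections (mean total degree 10), null maxima stay in the low twenties, so a recorded hub with total degree 60 would lie far outside the chance distribution.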

    Example multivariate TE interactions.

    <p>The PID multivariate TE is able to dissect different types of computations performed by one receiving neuron (I) with two transmitting neurons (J and K). Unique information is the portion of the information provided by one transmitter alone, redundancy is the portion of information provided by both transmitters, and synergy is the portion provided only by the combined input of both transmitters. Note that mutual information <i>MI</i>(<i>J</i><sub><i>P</i></sub>; <i>I</i><sub><i>F</i></sub>) does not detect common drive from the history of the receiving neuron (Hidden Self Interaction Example, red X). Note that the interaction information <i>II</i>(<i>J</i><sub><i>P</i></sub>; <i>K</i><sub><i>P</i></sub>; <i>I</i><sub><i>F</i></sub>) is not able to detect simultaneous synergy and redundancy (Synergistic and Redundant Interaction Example, red X) and it is not able to detect unique information (Single and Redundant Interaction Example).</p>
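    The caption's point about interaction information can be reproduced numerically. In a minimal sketch (the `interaction_information` helper and the example distributions are illustrative, not from the paper), an XOR target gives +1 bit (read as synergy), a duplicated input gives -1 bit (read as redundancy), and a two-bit target carrying one bit of each gives 0, hiding both.

```python
import numpy as np

def mi(pxy):
    """Mutual information (bits) from a 2-D joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def interaction_information(p):
    """II(X1; X2; Y) = I(X1,X2; Y) - I(X1; Y) - I(X2; Y) for p[x1, x2, y].
    Positive values are read as net synergy and negative as net redundancy,
    so simultaneous synergy and redundancy cancel out."""
    return (mi(p.reshape(-1, p.shape[2]))
            - mi(p.sum(axis=1)) - mi(p.sum(axis=0)))
```

    This cancellation is exactly why the full PID, which measures synergy and redundancy as separate nonnegative terms, is needed.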

    Degree-dependent computation.

    <p><b>(A and B)</b> Histograms across recordings of correlations between synergy (computation) and receiver in-degree (A) and between synergy (computation) and transmitter out-degree (B) (N<sub>data</sub> = 40). Also shown is the skew towards positive or negative correlation values along with the likelihood of observing a skew of that magnitude or larger under the assumption that positive and negative correlation values are equally likely (binomial cdf with p<sub>pos</sub> = p<sub>neg</sub> = 0.5). Nearly all correlations were likely to be significant given the proximity of the null model correlations (no correlation) to zero (null model consisted of randomized degree/computation pairings) (N<sub>null</sub> = 400). Histogram bin size optimized using methods established in [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.ref095" target="_blank">95</a>]. <b>(C and D)</b> Distributions of synergy values (computation) vs. degree averaged across all recordings. These plots show similar effects to (A and B). Solid line represents the median value; shaded region represents 1<sup>st</sup> quartile to 3<sup>rd</sup> quartile. Only degrees with 20 or more neuron groups are shown, so lower degrees, which had more neuron groupings, had a greater influence on correlation calculations in (A and B). Also, note that the in-degree distribution showed a shorter tail than the out-degree distribution (<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1004858#pcbi.1004858.g004" target="_blank">Fig 4B and 4D</a>), so it was not possible to extend the computation performed plot to high in-degrees. <b>(E and F)</b> Explanatory computation performed (E) and contribution to computation (F) networks. In (E), notice that all neurons compute the same amount of information, but that in (F), neurons with high out-degrees contribute more information to computations. Dot size represents the median values from the matching degree in (C). This shows that computation was uncorrelated with the in-degree of the receiver neuron, but was correlated with the out-degree of the transmitter neuron.</p>
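    A shuffle null like the randomized degree/computation pairings described above can be sketched as follows. This is an illustrative sketch, not the paper's exact statistics: `degree_synergy_correlation` is a hypothetical helper, and the choice of Pearson correlation is an assumption.

```python
import numpy as np

def degree_synergy_correlation(degree, synergy, n_shuffles=1000, rng=None):
    """Correlation between per-neuron degree and synergy, plus a null
    distribution of correlations obtained by randomly re-pairing the
    degree values with the synergy values."""
    if rng is None:
        rng = np.random.default_rng(0)
    r = np.corrcoef(degree, synergy)[0, 1]
    null = np.array([np.corrcoef(rng.permutation(degree), synergy)[0, 1]
                     for _ in range(n_shuffles)])
    return r, null
```

    An observed correlation well outside the null distribution (which clusters near zero) supports a genuine degree-computation relationship; the fraction of null correlations at least as extreme gives an empirical p-value.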

    Computations were performed by neurons using information about the spiking state of other neurons.

    <p>Many previous studies have examined the ability of neurons to compute information about stimuli via functional connections from those stimuli to cortical neurons (green arrows). In our analysis, we examined the computations performed by neurons about the spiking states of functionally connected neurons (blue arrows). Also, note that our experimental system is <i>ex vivo</i> and we analyzed spontaneous activity.</p>